
    Online Maximum k-Coverage

    We study an online model for the maximum k-vertex-coverage problem, in which, given a graph G = (V,E) and an integer k, we ask for a subset A ⊆ V such that |A| = k and the number of edges covered by A is maximized. In our model, at each step i a new vertex vᵢ is revealed, and we must decide whether to keep it or discard it. At any point in the process, only k vertices can be kept in memory; if the current solution already contains k vertices, including a new vertex entails the irrevocable deletion of one vertex of the current solution (a vertex not kept when revealed is likewise irrevocably deleted). We propose algorithms for several natural classes of graphs (mainly regular and bipartite) that improve on the easy 1/2-competitive ratio. We next address a set version of the problem, the maximum k-(set)-coverage problem, for which we present an algorithm that improves upon former results in the same model for small and moderate values of k.
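    As a concrete illustration of the online model (not the paper's algorithm, whose guarantees are specific to the graph classes studied), here is a minimal Python sketch of the natural swap heuristic: keep a new vertex only if exchanging it for a currently kept vertex increases the number of covered edges. All names are hypothetical.

        # Minimal sketch of the online k-vertex-coverage model (illustrative only;
        # not the algorithm from the paper). adj maps each vertex to its neighbor set.

        def covered_edges(adj, kept):
            """Number of edges with at least one endpoint in `kept`."""
            return sum(1 for u in adj for v in adj[u]
                       if u < v and (u in kept or v in kept))

        def online_k_coverage(adj, stream, k):
            kept = set()
            for v in stream:                      # vertices revealed one by one
                if len(kept) < k:
                    kept.add(v)                   # free slot: always keep
                    continue
                best_gain, best_out = 0, None
                for u in kept:                    # try swapping v in for u
                    cand = (kept - {u}) | {v}
                    gain = covered_edges(adj, cand) - covered_edges(adj, kept)
                    if gain > best_gain:
                        best_gain, best_out = gain, u
                if best_out is not None:          # irrevocable deletion of one vertex
                    kept.remove(best_out)
                    kept.add(v)
            return kept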

    From average case complexity to improper learning complexity

    The basic problem in the PAC model of computational learning theory is to determine which hypothesis classes are efficiently learnable. There is presently a dearth of results showing hardness of learning problems. Moreover, the existing lower bounds fall short of the best known algorithms. The biggest challenge in proving complexity results is to establish hardness of improper learning (a.k.a. representation-independent learning). The difficulty in proving lower bounds for improper learning is that the standard reductions from $\mathbf{NP}$-hard problems do not seem to apply in this context. There is essentially only one known approach to proving lower bounds on improper learning. It was initiated in (Kearns and Valiant 89) and relies on cryptographic assumptions. We introduce a new technique for proving hardness of improper learning, based on reductions from problems that are hard on average. We put forward a (fairly strong) generalization of Feige's assumption (Feige 02) about the complexity of refuting random constraint satisfaction problems. Combining this assumption with our new technique yields far-reaching implications. In particular: 1. Learning $\mathrm{DNF}$s is hard. 2. Agnostically learning halfspaces with a constant approximation ratio is hard. 3. Learning an intersection of $\omega(1)$ halfspaces is hard. Comment: 34 pages.

    Computational Difficulty of Global Variations in the Density Matrix Renormalization Group

    The density matrix renormalization group (DMRG) approach is arguably the most successful method to numerically find ground states of quantum spin chains. It amounts to iteratively locally optimizing matrix-product states, aiming at better and better approximating the true ground state. To date, both a proof of convergence to the globally best approximation and an assessment of its complexity are lacking. Here we establish a result on the computational complexity of an approximation with matrix-product states: the surprising result is that when one globally optimizes over several sites of local Hamiltonians, avoiding local optima, one encounters in the worst case a computationally difficult NP-hard problem (hard even to approximate). The proof exploits a novel way of relating it to binary quadratic programming. We discuss intriguing ramifications for the difficulty of describing quantum many-body systems. Comment: 5 pages, 1 figure, RevTeX, final version.
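    The reduction target, binary quadratic programming, is easy to state; below is a hedged, brute-force Python sketch of that problem class (illustrative only, not the paper's reduction), whose exponential running time is exactly what the NP-hardness result says is unavoidable in the worst case.

        # Brute-force binary quadratic programming: maximize x^T Q x over x in {0,1}^n.
        from itertools import product

        def bqp_bruteforce(Q):
            n = len(Q)
            best_val, best_x = float("-inf"), None
            for x in product((0, 1), repeat=n):   # all 2^n binary assignments
                val = sum(Q[i][j] * x[i] * x[j]
                          for i in range(n) for j in range(n))
                if val > best_val:
                    best_val, best_x = val, x
            return best_val, best_x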

    Finding Connected Dense k-Subgraphs

    Given a connected graph $G$ on $n$ vertices and a positive integer $k \le n$, a subgraph of $G$ on $k$ vertices is called a $k$-subgraph in $G$. We design combinatorial approximation algorithms for finding a connected $k$-subgraph in $G$ whose density is at least a factor $\Omega(\max\{n^{-2/5}, k^2/n^2\})$ of the density of the densest $k$-subgraph in $G$ (which is not necessarily connected). In particular, these provide the first non-trivial approximations for the densest connected $k$-subgraph problem on general graphs.
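    To make the objective concrete, a small hypothetical helper computing the density of an induced subgraph (here taken as edges divided by vertices; the paper's exact normalization may differ):

        # Density of the subgraph induced by S: edges inside S divided by |S|.
        # With |S| = k fixed, maximizing this is the densest-k-subgraph objective.

        def induced_density(adj, S):
            S = set(S)
            edges = sum(1 for u in S for v in adj[u] if v in S and u < v)
            return edges / len(S)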

    On Fast and Robust Information Spreading in the Vertex-Congest Model

    This paper initiates the study of the impact of failures on the fundamental problem of information spreading in the Vertex-Congest model, in which, in every round, each of the $n$ nodes sends the same $O(\log n)$-bit message to all of its neighbors. Our contribution to coping with failures is twofold. First, we prove that the randomized algorithm which chooses the next message to forward uniformly at random is slow, requiring $\Omega(n/\sqrt{k})$ rounds on some graphs, which we denote by $G_{n,k}$, where $k$ is the vertex-connectivity. Second, we design a randomized algorithm that makes dynamic message choices, with probabilities that change over the execution. We prove that for $G_{n,k}$ it requires only a near-optimal number of $O(n\log^3{n}/k)$ rounds, despite a rate of $q=O(k/n\log^3{n})$ failures per round. Our technique of choosing probabilities that change according to the execution is of independent interest. Comment: Appears in the SIROCCO 2015 conference.
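    A minimal simulation sketch (hypothetical names) of the uniform-random strategy that the lower bound shows to be slow; the paper's faster algorithm instead re-weights these choices dynamically as the execution progresses.

        import random

        # One round of the Vertex-Congest model under the uniform-random strategy:
        # every node forwards one held message, chosen uniformly at random, and
        # sends that same message to all of its neighbors.

        def uniform_round(adj, known):
            """known[v] is the set of messages node v currently holds."""
            delivered = {v: set() for v in adj}
            for v in adj:
                if known[v]:
                    m = random.choice(sorted(known[v]))  # uniform choice (sortable ids)
                    for u in adj[v]:
                        delivered[u].add(m)
            for v in adj:
                known[v] |= delivered[v]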

    Suggestive Annotation: A Deep Active Learning Framework for Biomedical Image Segmentation

    Image segmentation is a fundamental problem in biomedical image analysis. Recent advances in deep learning have achieved promising results on many biomedical image segmentation benchmarks. However, due to large variations in biomedical images (different modalities, image settings, objects, noise, etc.), utilizing deep learning in a new application usually requires a new set of training data. This can incur a great deal of annotation effort and cost, because only biomedical experts can annotate effectively, and often there are too many instances in images (e.g., cells) to annotate. In this paper, we aim to address the following question: with limited effort (e.g., time) for annotation, which instances should be annotated in order to attain the best performance? We present a deep active learning framework that combines a fully convolutional network (FCN) with active learning to significantly reduce annotation effort by making judicious suggestions on the most effective annotation areas. We utilize uncertainty and similarity information provided by the FCN and formulate a generalized version of the maximum set cover problem to determine the most representative and uncertain areas for annotation. Extensive experiments using the 2015 MICCAI Gland Challenge dataset and a lymph node ultrasound image segmentation dataset show that, using annotation suggestions from our method, state-of-the-art segmentation performance can be achieved with only 50% of the training data. Comment: Accepted at MICCAI 2017.
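    The area-selection step builds on maximum set cover; as a hedged sketch of that computation's flavor (the classical greedy rule with hypothetical names, not the paper's generalized formulation):

        # Greedy maximum coverage: choose k candidate annotation areas that together
        # cover as many items (e.g., similar image regions) as possible. The greedy
        # rule picks, at each step, the area covering the most uncovered items.

        def greedy_max_cover(candidates, k):
            """candidates: dict mapping area id -> set of items it covers."""
            chosen, covered = [], set()
            for _ in range(k):
                best = max(candidates, key=lambda a: len(candidates[a] - covered))
                if not candidates[best] - covered:
                    break                      # no area adds anything new
                chosen.append(best)
                covered |= candidates[best]
            return chosen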

    Quantum Interactive Proofs with Competing Provers

    This paper studies quantum refereed games, which are quantum interactive proof systems with two competing provers: one that tries to convince the verifier to accept and the other that tries to convince the verifier to reject. We prove that every language having an ordinary quantum interactive proof system also has a quantum refereed game in which the verifier exchanges just one round of messages with each prover. A key part of our proof is the fact that there exists a single quantum measurement that reliably distinguishes between mixed states chosen arbitrarily from disjoint convex sets having large minimal trace distance from one another. We also show how to reduce the probability of error for some classes of quantum refereed games. Comment: 13 pages, to appear in STACS 2005.
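    For background on the distinguishing step: for two fixed states the optimal single measurement is characterized by the Holevo-Helstrom bound, recalled below; the paper's contribution extends single-measurement distinguishability to states chosen adversarially from disjoint convex sets.

        % Two-state baseline (Holevo-Helstrom): given \rho or \sigma with equal
        % probability, the best measurement identifies the state with probability
        \Pr[\text{correct}] \;=\; \frac{1}{2} + \frac{1}{4}\,\lVert \rho - \sigma \rVert_1,
        % so trace distance bounded away from zero keeps the error probability
        % bounded away from 1/2.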

    Sampling and Representation Complexity of Revenue Maximization

    We consider (approximate) revenue maximization in auctions where the distribution on input valuations is given via "black box" access to samples from the distribution. We observe that the number of samples required -- the sample complexity -- is tightly related to the representation complexity of an approximately revenue-maximizing auction. Our main results are upper bounds and an exponential lower bound on these complexities.
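    As a hedged single-bidder illustration of sample-based revenue maximization (a textbook special case with hypothetical names, not the paper's construction): with samples in hand, one can post the empirically best reserve price.

        # Single bidder, posted price: given samples v_1..v_m from the valuation
        # distribution, evaluate each sample as a candidate price r and pick the
        # one maximizing empirical revenue r * Pr[v >= r].

        def empirical_reserve(samples):
            m = len(samples)
            def emp_revenue(r):
                return r * sum(1 for v in samples if v >= r) / m
            return max(samples, key=emp_revenue)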

    Parallel Repetition of Entangled Games with Exponential Decay via the Superposed Information Cost

    In a two-player game, two cooperating but non-communicating players, Alice and Bob, receive inputs drawn from a probability distribution. Each of them produces an output, and they win the game if their inputs/outputs satisfy some predicate. The entangled value $\omega^*(G)$ of a game $G$ is the maximum probability that Alice and Bob can win the game if they are allowed to share an entangled state prior to receiving their inputs. The $n$-fold parallel repetition $G^n$ of $G$ consists of $n$ instances of $G$ where the players receive all the inputs at the same time and produce all the outputs at the same time. They win $G^n$ if they win each instance of $G$. In this paper we show that for any game $G$ such that $\omega^*(G) = 1 - \varepsilon < 1$, $\omega^*(G^n)$ decreases exponentially in $n$. First, for any game $G$ on the uniform distribution, we show that $\omega^*(G^n) = (1 - \varepsilon^2)^{\Omega\left(\frac{n}{\log(|I||O|)} - |\log(\varepsilon)|\right)}$, where $|I|$ and $|O|$ are the sizes of the input and output sets. From this result, we show that for any entangled game $G$, $\omega^*(G^n) \le (1 - \varepsilon^2)^{\Omega(\frac{n}{Q\log(|I||O|)} - \frac{|\log(\varepsilon)|}{Q})}$, where $p$ is the input distribution of $G$ and $Q = \frac{|I|^2 \max_{xy} p_{xy}^2}{\min_{xy} p_{xy}}$. This implies parallel repetition with exponential decay for general games as long as $\min_{xy} \{p_{xy}\} \neq 0$. To prove this parallel repetition result, we introduce the concept of the Superposed Information Cost for entangled games, which is inspired by the information cost used in communication complexity. Comment: In the first version of this paper we presented a different, stronger Corollary 1, but due to an error in the proof we had to modify it in the second version. This third version is a minor update; we correct some typos and re-introduce a proof accidentally commented out in the second version.
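    A one-line reminder of why such a theorem requires proof: playing the instances independently already achieves the product value, so only the upper bound is at issue, and entangled players can in general correlate answers across instances to beat the product.

        % Independent play gives a lower bound, not an upper bound:
        \omega^*(G^n) \;\ge\; \omega^*(G)^n \;=\; (1-\varepsilon)^n,
        % and \omega^*(G^n) can strictly exceed \omega^*(G)^n, which is why the
        % exponential upper bound (1-\varepsilon^2)^{\Omega(\cdot)} is nontrivial.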

    Algorithmic and Hardness Results for the Colorful Components Problems

    In this paper we investigate the colorful components framework, motivated by applications emerging from comparative genomics. The general goal is to remove a collection of edges from an undirected vertex-colored graph $G$ such that in the resulting graph $G'$ all the connected components are colorful (i.e., any two vertices of the same color belong to different connected components). We want $G'$ to optimize an objective function, the selection of this function being specific to each problem in the framework. We analyze three objective functions, and thus three different problems, which are believed to be relevant for the biological applications: minimizing the number of singleton vertices, maximizing the number of edges in the transitive closure, and minimizing the number of connected components. Our main result is a polynomial-time algorithm for the first problem. This result disproves the conjecture of Zheng et al. that the problem is NP-hard (assuming $P \neq NP$). Then, we show that the second problem is APX-hard, thus proving and strengthening the conjecture of Zheng et al. that the problem is NP-hard. Finally, we show that the third problem does not admit a polynomial-time approximation within a factor of $|V|^{1/14 - \epsilon}$ for any $\epsilon > 0$, assuming $P \neq NP$ (or within a factor of $|V|^{1/2 - \epsilon}$, assuming $ZPP \neq NP$). Comment: 18 pages, 3 figures.
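    To make the colorful condition concrete, a small hypothetical Python check that every connected component sees each color at most once:

        # Check that every connected component of the graph is "colorful",
        # i.e., no color appears on two vertices of the same component.
        from collections import deque

        def is_colorful(adj, color):
            seen = set()
            for s in adj:
                if s in seen:
                    continue
                comp_colors, queue = set(), deque([s])
                seen.add(s)
                while queue:                   # BFS over one component
                    u = queue.popleft()
                    if color[u] in comp_colors:
                        return False           # repeated color in a component
                    comp_colors.add(color[u])
                    for v in adj[u]:
                        if v not in seen:
                            seen.add(v)
                            queue.append(v)
            return True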